
    OPTIMIZING SERVER CONSOLIDATION FOR ENTERPRISE APPLICATION SERVICE PROVIDERS

    Get PDF
    In enterprise application environments, hardware resources show low average utilization rates because capacity is provisioned for peak demands. The consolidation of orthogonal workloads can therefore improve energy efficiency and reduce the total cost of ownership. In this paper, we address the existing workload consolidation potential by solving a bin packing problem in which the number of servers is minimized. Since dynamic workloads, gathered from historical traces, and the priorities of running services are considered, we formulate the Dynamic Priority-based Workload Consolidation Problem (DPWCP) and develop solution algorithms based on heuristics and metaheuristics. The relevance of the problem is demonstrated by an analysis of service resource demands and server capacities across four cases studied at productively operating enterprise application service providers. After a classification of related work, seven algorithms were developed and evaluated with regard to their exploited optimization potential and computing time. The best results were achieved by a best-fit approach that uses a genetic algorithm to optimize its input sequence (GA_BF). When applying GA_BF to the four studied cases, average utilization rates increased from 23 to 63 percent within an average computing time of 22.5 seconds, reducing the overall server capacity by up to 83%.
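
    A minimal sketch of the GA_BF idea described above, under strong simplifying assumptions: each service has a single static, normalized demand (the paper instead uses dynamic traces and priorities), and all numbers and GA parameters below are invented for illustration. A genetic algorithm searches over input sequences, and each sequence is scored by the number of servers a best-fit packing needs.

        import random

        def best_fit(sequence, demands, capacity):
            """Pack demands in the given order; return per-server residual capacities."""
            servers = []  # residual capacity of each opened server
            for svc in sequence:
                d = demands[svc]
                # choose the feasible server with the smallest residual capacity
                candidates = [i for i, r in enumerate(servers) if r >= d]
                if candidates:
                    best = min(candidates, key=lambda i: servers[i])
                    servers[best] -= d
                else:
                    servers.append(capacity - d)
            return servers

        def ga_bf(demands, capacity, pop_size=30, generations=200, seed=0):
            rng = random.Random(seed)
            n = len(demands)
            population = [rng.sample(range(n), n) for _ in range(pop_size)]
            fitness = lambda seq: len(best_fit(seq, demands, capacity))
            for _ in range(generations):
                population.sort(key=fitness)
                parents = population[: pop_size // 2]
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = rng.sample(parents, 2)
                    cut = rng.randrange(1, n)          # simplified order crossover
                    child = a[:cut] + [g for g in b if g not in a[:cut]]
                    i, j = rng.sample(range(n), 2)     # swap mutation
                    child[i], child[j] = child[j], child[i]
                    children.append(child)
                population = parents + children
            best = min(population, key=fitness)
            return best, fitness(best)

        demands = [0.3, 0.7, 0.2, 0.5, 0.4, 0.6, 0.1]   # normalized peak demands (made up)
        sequence, servers_needed = ga_bf(demands, capacity=1.0)
        print("servers needed:", servers_needed)

    The design choice mirrored here is that the genetic algorithm never constructs placements itself; it only reorders the input, while the cheap best-fit heuristic does the actual packing.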

    Integrated Optimization of IT Service Performance and Availability Using Performability Prediction Models

    Get PDF
    Optimizing the performance and availability of an IT service in the design stage is typically treated as two independent tasks. However, since both aspects are related to one another, these activities can be combined by applying performability models, in which both the performance and the availability of a service are predicted more accurately. In this paper, a design optimization problem for IT services is defined and applied in two scenarios, one of which considers a mechanism in which redundant components are used both for failover and for handling overload situations. The results show that including such aspects, which affect both availability and performance, in prediction models can lead to more cost-effective service designs. Performability prediction models are thus one way to combine performance and availability management for IT services.
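
    To make the performability idea concrete, here is a minimal sketch, not the paper's model: the expected reward is the probability-weighted performance over the system's availability states, and the redundant server is assumed to also absorb overload while it is up. All numbers are assumptions, and server failures are assumed independent for simplicity.

        from itertools import product

        availability = 0.99          # per-server steady-state availability (assumed)
        per_server_capacity = 100.0  # requests/s one server can handle (assumed)
        offered_load = 150.0         # requests/s arriving at the service (assumed)
        servers = 2                  # active/redundant pair; the spare also serves load

        expected_throughput = 0.0
        for state in product([True, False], repeat=servers):   # each server up or down
            up = sum(state)
            p_state = (availability ** up) * ((1 - availability) ** (servers - up))
            served = min(offered_load, up * per_server_capacity)   # reward in this state
            expected_throughput += p_state * served

        print(f"expected throughput: {expected_throughput:.1f} requests/s")
        print(f"availability (>=1 server up): {1 - (1 - availability)**servers:.6f}")

    Treating the spare as a pure failover resource would cap the served load at one server's capacity; letting it handle overload changes the predicted performance, which is exactly the coupling a performability model captures.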

    PREDICTING AVAILABILITY AND RESPONSE TIMES OF IT SERVICES

    Get PDF
    When IT service providers adapt their IT system landscapes because of new technologies or changing business requirements, the effects of these changes on the quality of service must be considered in order to fulfill service level agreements. Analytical prediction models can support this process in the service design stages, but they do not take dependencies between quality aspects into account. In this paper, a novel approach for predicting the availability and response time of an IT service is developed; it is simulation-based in order to support dynamic analysis of service quality. The correctness of the model as well as its applicability are evaluated in a real case. This work therefore presents a step towards an analytical framework for predicting IT service quality aspects.
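
    A minimal sketch of why availability and response time should be simulated together, under assumptions that are not the paper's model: two servers fail and recover with exponential times, and while the service is up its response time follows an M/M/1-style approximation on the pooled capacity of the servers that are currently up. All rates are invented.

        import random

        mttf, mttr = 1000.0, 10.0      # hours per server (assumed)
        arrival_rate = 80.0            # requests/h (assumed)
        service_rate = 100.0           # requests/h per server (assumed)
        servers = 2
        horizon = 1_000_000.0          # simulated hours
        rng = random.Random(1)

        t, up = 0.0, servers
        uptime, weighted_rt = 0.0, 0.0
        while t < horizon:
            # competing exponential events: failure of an up server or repair of a down one
            rate = up / mttf + (servers - up) / mttr
            dt = rng.expovariate(rate)
            if up > 0 and arrival_rate < up * service_rate:
                rt = 1.0 / (up * service_rate - arrival_rate)   # M/M/1-style response time
                uptime += dt
                weighted_rt += rt * dt
            t += dt
            if rng.random() < (up / mttf) / rate:
                up -= 1                 # a failure occurred
            else:
                up += 1                 # a repair completed

        print(f"availability: {uptime / horizon:.5f}")
        print(f"mean response time while available: {weighted_rt / uptime:.4f} h")

    The dependency is visible in the result: whenever one server is down, the surviving server carries the full load and response times degrade, so the two quality aspects cannot be predicted in isolation.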

    Analyzing the Effects of Load Distribution Algorithms on Energy Consumption of Servers in Cloud Data Centers

    Get PDF
    Cloud computing has become an important driver of IT service provisioning in recent years. It offers additional flexibility to both customers and IT service providers, but it also brings new challenges for providers. One of the major challenges is the reduction of energy consumption, since energy already accounts for more than 50% of the operational costs in data centers. A possible way to reduce these costs is to distribute load efficiently within the data center. Although the effect of load distribution algorithms on energy consumption is a topic of recent research, an analysis framework for evaluating arbitrary load distribution algorithms with regard to their effects on the energy consumption of cloud data centers does not yet exist. In this contribution, a concept for a simulation-based, quantitative analysis framework for load distribution algorithms in cloud environments with respect to the energy consumption of data centers is therefore developed and evaluated.
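
    A minimal sketch of how such a framework can score a load distribution algorithm by energy, under assumptions not taken from the paper: a linear server power model and made-up workload placements. Two simple policies are compared for one hour of operation.

        P_IDLE, P_MAX = 100.0, 250.0    # watts per server (assumed)
        CAPACITY = 1.0                  # normalized capacity per server
        requests = [0.15, 0.30, 0.10, 0.25, 0.20, 0.35, 0.05, 0.40]  # workload shares (made up)

        def power(utilization):
            """Linear server power model: idle power plus load-proportional share."""
            return P_IDLE + (P_MAX - P_IDLE) * utilization

        def energy(assignment, hours=1.0):
            """Total energy (Wh) of all powered-on servers for the given placement."""
            return sum(power(u) for u in assignment if u > 0) * hours

        def round_robin(loads, n_servers=8):
            util = [0.0] * n_servers
            for i, l in enumerate(loads):
                util[i % n_servers] += l
            return util

        def consolidating_first_fit(loads):
            util = []
            for l in sorted(loads, reverse=True):
                for i, u in enumerate(util):
                    if u + l <= CAPACITY:
                        util[i] += l
                        break
                else:
                    util.append(l)
            return util

        print("round robin      :", energy(round_robin(requests)), "Wh")
        print("consolidating fit:", energy(consolidating_first_fit(requests)), "Wh")

    Because the idle power dominates, the consolidating policy that powers on fewer servers consumes markedly less energy for the same total load, which is the kind of effect the proposed framework is meant to quantify.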

    Collaborative Software Performance Engineering for Enterprise Applications

    Get PDF
    In the domain of enterprise applications, organizations usually implement third-party standard software components in order to save costs. Hence, application performance monitoring activities constantly produce log entries that are comparable to a certain extent, holding the potential for valuable collaboration across organizational borders. Taking advantage of this fact, we propose a collaborative knowledge base aimed at supporting decisions in performance engineering activities carried out during the early design phases of planned enterprise applications. To verify our assumption of cross-organizational comparability, machine learning algorithms were trained on monitoring logs of 18,927 standard application instances productively running at different organizations around the globe. Using random forests, we were able to predict the mean response time for selected standard business transactions with a mean relative error of 23.19 percent. The approach thus combines the benefits of existing measurement-based and model-based performance prediction techniques, leading to competitive advantages enabled by inter-organizational collaboration.
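
    A minimal sketch of the prediction step, not the study's actual pipeline: the feature layout, synthetic data, and hyperparameters below are assumptions. A random forest regressor is trained on monitoring-derived features and judged by the mean relative error of its response-time predictions, the metric reported above.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000
        # synthetic stand-ins for monitoring features (e.g. call rate, DB time share, host size)
        X = rng.uniform(size=(n, 3))
        y = 50 + 400 * X[:, 0] + 200 * X[:, 1] * X[:, 2] + rng.normal(0, 20, n)  # response time in ms

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        pred = model.predict(X_te)
        mre = np.mean(np.abs(pred - y_te) / y_te)        # mean relative error
        print(f"mean relative error: {mre:.2%}")

    In the collaborative setting, the training data would come from many organizations' monitoring logs of the same standard transactions, which is what makes a shared model feasible in the first place.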

    User Story: Besuchernachweis im Covid-19-Kontext

    Get PDF
    The corona pandemic not only led to massive changes in libraries with regard to digital services; visitor traffic also had to be reorganized under new regulations when libraries reopened. On the one hand, ministerial guidelines for tracing potential contacts must be observed; on the other hand, data protection issues have to be considered. Furthermore, uncomplicated use and fast data registration have high priority in a low-contact but high-throughput operation. At the Magdeburg University Library, a solution was developed that covers many of these requirements without additional resources and is also available for reuse by other institutions. In this paper, the requirements are briefly described, the software, which was developed using agile project management, is presented, and potentials for its reuse and further development are shown based on the first operational experiences.

    Predicting an IT Service’s Availability with Respect to Operator Errors

    No full text
    Ensuring high availability of IT services is crucial for IT service providers, not least because of the cloud computing paradigm. The management of IT system landscapes should therefore be supported by availability predictions. Although operator errors account for a high number of service downtimes, only few approaches for quantitative availability prediction consider operator actions and errors. Likewise, mechanisms that were developed to counter the high number of operator errors cannot be modeled with existing approaches. Therefore, a new design for the availability prediction of IT services is developed. Based on a flexible Petri net simulation, operator errors and the mechanisms to counter them are introduced. An evaluation shows that the developed design is correct and its results are plausible.
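
    A minimal sketch of how operator errors and a countermeasure can enter an availability estimate; this is far simpler than a Petri-net-based design, and the state model and all rates are assumptions. A faulty repair re-breaks the service, and an optional review step catches a share of those faulty repairs.

        import random

        def simulate(review_enabled, hours=2_000_000.0, seed=7):
            rng = random.Random(seed)
            MTTF, MTTR = 2000.0, 8.0          # hardware failure/repair times in hours (assumed)
            p_operator_error = 0.15           # faulty repair re-breaks the service (assumed)
            p_review_catch = 0.8              # share of faulty repairs caught by review (assumed)
            t, downtime = 0.0, 0.0
            while t < hours:
                t += rng.expovariate(1.0 / MTTF)          # time until the next failure
                repair = rng.expovariate(1.0 / MTTR)      # duration of the repair
                # an uncaught operator error makes the repair ineffective -> another repair cycle
                while rng.random() < p_operator_error and not (
                    review_enabled and rng.random() < p_review_catch
                ):
                    repair += rng.expovariate(1.0 / MTTR)
                downtime += repair
                t += repair
            return 1.0 - downtime / t

        print("availability without review:", round(simulate(False), 5))
        print("availability with review   :", round(simulate(True), 5))

    The comparison of the two runs illustrates the kind of question the proposed design answers: how much availability a mechanism against operator errors actually buys.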

    Multidimensional Workload Consolidation for Enterprise Application Service Providers

    No full text
    In the domain of enterprise applications, operational costs can be reduced by consolidating orthogonal workloads with the objective of maximizing server utilization levels and minimizing the total amount of required capacity. This is closely related to the well-known bin packing problem, which is NP-hard. Related problem formulations often consider varying historical workload traces, but include only one resource dimension, usually the CPU. This implies a serious risk of overloading resources that are not related to CPU demand, such as memory. Therefore, we formulate the multidimensional workload consolidation problem and develop eight algorithms to provide solutions. We evaluate their applicability using workload traces gathered from four data centers. A best-fit heuristic that uses a genetic algorithm provides the best solution quality with the lowest variance and revealed up to 53.39 percent of unused capacity. In general, multidimensional workload consolidation problems eliminate less server capacity, but they effectively reduce the risk of resource overloads.
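
    A minimal sketch of the multidimensional feasibility check behind such a consolidation; the dimensions, trace values, and the simple best-fit scoring are assumptions, not the paper's algorithms. A service fits on a server only if the summed CPU and memory traces stay within capacity at every point in time.

        CAPACITY = {"cpu": 1.0, "mem": 1.0}

        def fits(server_traces, service_traces):
            """True if adding the service never overloads any resource dimension."""
            return all(
                s + x <= CAPACITY[dim]
                for dim in CAPACITY
                for s, x in zip(server_traces[dim], service_traces[dim])
            )

        def add(server_traces, service_traces):
            for dim in CAPACITY:
                server_traces[dim] = [s + x for s, x in zip(server_traces[dim], service_traces[dim])]

        def best_fit_multi(services):
            """Place each service on the fullest feasible server, opening a new one if needed."""
            servers = []
            for svc in services:
                feasible = [s for s in servers if fits(s, svc)]
                if feasible:
                    # 'fullest' = highest peak utilization across both dimensions
                    target = max(feasible, key=lambda s: max(max(s["cpu"]), max(s["mem"])))
                    add(target, svc)
                else:
                    servers.append({dim: list(svc[dim]) for dim in CAPACITY})
            return servers

        # two-interval traces per service (made up): CPU-heavy vs. memory-heavy workloads
        services = [
            {"cpu": [0.6, 0.2], "mem": [0.2, 0.2]},
            {"cpu": [0.2, 0.6], "mem": [0.3, 0.3]},
            {"cpu": [0.1, 0.1], "mem": [0.6, 0.6]},
        ]
        print("servers needed:", len(best_fit_multi(services)))

    In this toy case, the third service would fit a CPU-only check but is rejected because of memory, which is exactly the overload risk a single-dimension formulation misses.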

    Vorhersagemodell für die Verfügbarkeit von IT-Services aus Anwendungssystemlandschaften

    No full text
    Service orientation and cloud computing have led more and more companies to specialize in offering IT services to organizations. Increased competition in this industry creates cost and quality pressure, which makes a better command of the associated service delivery process desirable, supported by quantifiable predictions for changes to the application system landscapes of service providers. In this contribution, a new approach is developed for predicting the availability of IT services provided from application system landscapes. In contrast to previously known solutions, this approach does not assume that the individual system components fail independently of one another. It is shown that, in the absence of such dependencies, the approach reproduces the results of analytical methods. Decision processes can thus be supported.
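
    A minimal sketch of why the independence assumption matters; the two-component series system and the shared-failure probability are assumptions, not the contribution's model. The analytic availability under independence is compared with a Monte Carlo estimate that includes a common cause affecting both components.

        import random

        a1, a2 = 0.999, 0.995      # component availabilities (assumed)
        p_common = 0.002           # probability that a shared cause takes both down (assumed)
        rng = random.Random(42)

        analytic_independent = a1 * a2          # series system, independent failures

        samples, up = 1_000_000, 0
        for _ in range(samples):
            common_down = rng.random() < p_common
            c1_up = (not common_down) and rng.random() < a1
            c2_up = (not common_down) and rng.random() < a2
            up += c1_up and c2_up
        simulated_dependent = up / samples

        print(f"independent (analytic)  : {analytic_independent:.5f}")
        print(f"with common cause (sim) : {simulated_dependent:.5f}")

    With dependencies switched off (p_common = 0), the simulation converges to the analytic product, which mirrors the validation argument made above.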

    Capacity Management as a Service for Enterprise Standard Software

    Get PDF
    Capacity management approaches optimize component utilization from a strongly technical perspective. In fact, the quality of the involved services is only considered implicitly, by linking it to resource capacity values. This practice makes it difficult to evaluate design alternatives with respect to given service levels that are expressed in user-centric metrics, such as the mean response time of a business transaction. We argue that the historical workload traces used in capacity management often contain a variety of performance-related information that allows performance prediction techniques to be integrated through machine learning. Since enterprise applications make extensive use of standard software that is shipped by large software vendors to a wide range of customers, standardized prediction models can be trained and provisioned as part of a capacity management service, which we propose in this article. To this end, we integrate knowledge discovery activities into well-known capacity planning steps, which we adapt to the special characteristics of enterprise applications. Using a real-world example, we demonstrate how prediction models trained on large-scale monitoring data enable cost-efficient measurement-based prediction techniques to be used in the early design and redesign phases of planned or running applications. Finally, based on the trained model, we demonstrate how to simulate and analyze future workload scenarios. Using a Pareto approach, we were able to identify cost-effective design alternatives for an enterprise application whose capacity is being managed.
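
    A minimal sketch of the Pareto step at the end of the abstract; the design alternatives and their cost and response-time values are made up. An alternative is kept if no other alternative is at least as good in both cost and predicted mean response time.

        alternatives = {
            # name: (monthly cost in EUR, predicted mean response time in ms) -- assumed values
            "2 small app servers":  (1200.0, 480.0),
            "3 small app servers":  (1800.0, 310.0),
            "2 large app servers":  (2000.0, 290.0),
            "4 small app servers":  (2400.0, 300.0),
            "2 large + cache tier": (2600.0, 220.0),
        }

        def dominates(a, b):
            """a dominates b if it is no worse in cost and response time and not identical."""
            return a[0] <= b[0] and a[1] <= b[1] and a != b

        pareto = {
            name: vals
            for name, vals in alternatives.items()
            if not any(dominates(other, vals) for other in alternatives.values())
        }
        print("cost-effective (non-dominated) designs:", sorted(pareto))

    The non-dominated set is what a decision maker would review against the service levels; dominated designs, such as one that costs more yet responds more slowly than another, are discarded automatically.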